In this paper, we present a polarimetric image restoration approach that recovers the Stokes parameters and the degree of linear polarization from their degraded counterparts. These quantities are corrupted by the degradations introduced by partial occlusion or turbid media, such as scattering and attenuation in turbid water. Polarimetric image restoration with corresponding Mueller matrix estimation is performed using polarization-informed deep learning and 3D integral imaging. An unsupervised image-to-image translation (UNIT) framework is used to obtain clean Stokes parameters from the degraded ones. Additionally, a multi-output convolutional neural network (CNN) branch predicts the Mueller matrix estimate along with an estimate of the corresponding residue. The degree of linear polarization, together with the Mueller matrix estimate, provides information about the characteristics of the underlying transmission medium and the object under consideration. The approach has been evaluated under different environmentally degraded conditions, such as various levels of turbidity and partial occlusion. 3D integral imaging reduces the effects of degradations in a turbid medium, and a performance comparison between 3D and 2D imaging under varying scene conditions is provided. Experimental results suggest that the proposed approach is promising under the scene degradations considered. To the best of our knowledge, this is the first report on polarization-informed deep learning in 3D imaging that attempts to recover polarimetric information along with the corresponding Mueller matrix estimate in a degraded environment.
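As background for the quantities restored above, the linear Stokes parameters and the degree of linear polarization (DoLP) are conventionally computed from four intensity images captured through a linear polarizer at 0°, 45°, 90°, and 135°. The sketch below uses the standard textbook definitions; it is a generic illustration, not the paper's implementation, and the function name and arguments are our own:

```python
import numpy as np

def stokes_and_dolp(i0, i45, i90, i135):
    """Linear Stokes parameters and DoLP from four polarizer-angle intensities."""
    s0 = i0 + i90                # total intensity
    s1 = i0 - i90                # horizontal vs. vertical polarization preference
    s2 = i45 - i135              # +45 deg vs. -45 deg preference
    # Guard against division by zero in dark pixels
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    return s0, s1, s2, dolp
```

For fully horizontally polarized light (i0 = 1, i90 = 0, i45 = i135 = 0.5), this gives S0 = 1, S1 = 1, S2 = 0, and DoLP = 1, as expected.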
-
In this paper, we propose a procedure to analyze lensless single random phase encoding (SRPE) systems and assess their robustness to variations in image sensor pixel size as the input signal frequency is varied. We use wave propagation to estimate the maximum pixel size at which lensless SRPE intensity patterns can be captured such that a given input signal frequency is recorded accurately. Lensless SRPE systems are constructed by placing a diffuser in front of an image sensor so that the optical field coming from an object is modulated before its intensity signature is captured at the image sensor. Since diffuser surfaces contain very fine features, the captured intensity patterns always contain high spatial frequencies regardless of the input frequencies; hence, a conventional Nyquist-criterion-based treatment of this problem would not give a meaningful characterization. We propose a theoretical estimate of the upper limit on the image sensor pixel size such that variations in the input signal are adequately captured by the sensor pixels. A numerical simulation of lensless SRPE systems using angular spectrum propagation and mutual information verifies our theoretical analysis; the simulation estimate of the sampling criterion matches our proposed theoretical estimate very closely. We provide a closed-form estimate for the maximum sensor pixel size as a function of input frequency and system parameters, making it possible to optimize general-purpose SRPE systems. Our results show that lensless SRPE systems have much greater robustness to sensor pixel size than lens-based systems, which makes SRPE useful for exotic imagers with large pixels. To the best of our knowledge, this is the first report to investigate sampling in lensless SRPE systems as a function of input image frequency and the physical parameters of the system in order to estimate the maximum image sensor pixel size.
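The numerical simulations above rely on angular spectrum propagation. A minimal numpy sketch of the angular spectrum method in its generic textbook form follows; this is not the authors' code, and the parameter names are our own:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z via the angular spectrum method.

    field: square complex-valued 2D array sampled at pitch dx (meters).
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)          # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # transfer function; evanescent waves cut off
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function has unit magnitude for all propagating components, total intensity is conserved for band-limited inputs, which is a convenient sanity check on the sampling.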
-
Image restoration aims to recover a clean image from a noisy one and has long been a topic of interest for researchers in imaging, optical science, and computer vision. As imaging environments become more degraded, the problem becomes more challenging. Several computational approaches, ranging from statistical methods to deep learning, have been proposed over the years. Deep learning-based approaches provide promising restoration results, but they are purely data-driven, and their need for large (paired or unpaired) training datasets can diminish their utility for certain physical problems. Recently, physics-informed image restoration techniques have gained importance due to their ability to enhance performance, infer aspects of the degradation process, and quantify uncertainty in the prediction results. In this paper, we propose a physics-informed deep learning approach with simultaneous parameter estimation using 3D integral imaging and a Bayesian neural network (BNN). An image-to-image mapping architecture is first pretrained to generate a clean image from a degraded image, and is then trained jointly with the BNN for simultaneous parameter estimation. For network training, data simulated from the physical model is used instead of actual degraded data. The proposed approach has been tested experimentally under degradations such as low illumination and partial occlusion, and the recovery results are promising despite training on a simulated dataset. We evaluated the performance under varying levels of illumination and also analyzed the proposed approach against the corresponding 2D imaging-based approach; the results suggest significant improvements over 2D even when training on similar datasets. Moreover, the parameter estimation results demonstrate the utility of the approach in estimating the degradation parameter in addition to restoring the image under the experimental conditions considered.
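The training data above is simulated from a physical degradation model rather than captured. As a hypothetical illustration of how low-illumination training pairs might be synthesized, the sketch below applies Poisson shot noise parameterized by an illumination (photon) level plus Gaussian read noise; the paper does not specify this exact model, and the parameterization is our assumption:

```python
import numpy as np

def degrade_low_light(img, photon_level, read_noise_std=1.0, seed=None):
    """Simulate a low-illumination capture of a clean image in [0, 1].

    photon_level: expected photon count at full-scale intensity;
    lower values mean darker scenes and stronger shot noise.
    """
    rng = np.random.default_rng(seed)
    shot = rng.poisson(img * photon_level).astype(float)   # photon shot noise
    noisy = shot + rng.normal(0.0, read_noise_std, img.shape)  # sensor read noise
    return np.clip(noisy / photon_level, 0.0, 1.0)         # renormalize to [0, 1]
```

Sweeping `photon_level` yields a family of degraded images with a known ground-truth parameter, which is the kind of labeled pair a BNN parameter-estimation head can be trained on.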
-
Lensless devices paired with deep learning models have recently shown great promise as a novel approach to biological screening. As a first step toward performing automated lensless cell identification non-invasively, we present a field-portable, compact lensless system that can detect and classify smeared whole blood samples through layers of scattering media. In this system, light from a partially coherent laser diode propagates through the sample, which is positioned between two layers of scattering media, and the resultant opto-biological signature is captured by an image sensor. The signature is transformed via local binary pattern (LBP) transformation, and the resultant LBP images are processed by a convolutional neural network (CNN) to identify the type of red blood cells in the sample. We validated our system in an experimental setup where whole blood samples are placed between two diffusive layers of increasing thickness, and the robustness of the system against variations in layer thickness is investigated. Several CNN models were considered (i.e., AlexNet, VGG-16, and SqueezeNet), individually optimized, and compared against a traditional learning model consisting of principal component decomposition and a support vector machine (PCA + SVM). We found that a two-stage SqueezeNet architecture and VGG-16 provide the highest classification accuracy and Matthew's correlation coefficient (MCC) score when applied to images acquired by our lensless system, with SqueezeNet outperforming the other classifiers when the thickness of the scattering layer is the same in training and test data (accuracy: 97.2%; MCC: 0.96), and VGG-16 proving the most robust option as the thickness of the scattering layers in the test data increases up to three times the value used during training. Altogether, this work provides a proof of concept for non-invasive blood sample identification through scattering media with lensless devices using deep learning. Our system has the potential to be a viable diagnostic device because of its low cost, field portability, and high identification accuracy.
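The LBP transformation above replaces each pixel with a binary code built from intensity comparisons against its eight neighbors. A basic (non-rotation-invariant) variant can be sketched as follows; this is the standard LBP definition and not necessarily the exact variant used in the system described:

```python
import numpy as np

def lbp_transform(img):
    """8-neighbor local binary pattern of a 2D grayscale image (basic variant)."""
    padded = np.pad(img, 1, mode='edge')   # replicate borders so every pixel has 8 neighbors
    center = padded[1:-1, 1:-1]
    h, w = img.shape
    # Neighbor offsets, ordered clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        # Set this bit wherever the neighbor is at least as bright as the center
        code |= (neighbor >= center).astype(np.uint8) << bit
    return code
```

On a perfectly uniform image every comparison succeeds, so every pixel maps to the all-ones code 255, which makes a simple correctness check.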
-
In this paper, we assess the noise susceptibility of coherent macroscopic single random phase encoding (SRPE) lensless imaging by analyzing how much information is lost due to the presence of camera noise. We used numerical simulation to first obtain the noise-free point spread function (PSF) of a diffuser-based SRPE system. Afterwards, we generated a noisy PSF by introducing shot noise, read noise, and quantization noise as seen in a real-world camera. We then used various statistical measures to examine how the shared information content between the noise-free and noisy PSFs is affected as the camera noise becomes stronger. We ran identical simulations with the diffuser in the lensless SRPE imaging system replaced by lenses for comparison with lens-based imaging. Our results show that, under high camera noise, SRPE lensless imaging systems retain more information between corresponding noisy and noiseless PSFs than lens-based imaging systems. We also examined how physical parameters of the diffuser, such as feature size and feature height variation, affect the noise robustness of an SRPE system. To the best of our knowledge, this is the first report to investigate the noise robustness of SRPE systems as a function of diffuser parameters, and it paves the way for the use of lensless SRPE systems to improve imaging in the presence of image sensor noise.
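The shared information content between the noise-free and noisy PSFs can be quantified with mutual information. The sketch below is a simple histogram-based estimator of mutual information in bits; it is a generic formulation, and the bin count and estimator details are our assumptions rather than the paper's settings:

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Histogram-based mutual information (in bits) between two equally sized arrays."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)        # marginal over x bins
    py = pxy.sum(axis=0, keepdims=True)        # marginal over y bins
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

The mutual information between a signal and itself equals the (binned) entropy of the signal, while the estimate for two independent signals is close to zero, so the self-MI gives a natural upper bound against which degraded-PSF MI can be compared.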
-
We propose a diffuser-based lensless underwater optical signal detection system. The system consists of a lensless one-dimensional (1D) camera array equipped with random phase modulators for signal acquisition and a one-dimensional integral imaging convolutional neural network (1DInImCNN) for signal classification. During acquisition, the encoded signal transmitted by a light-emitting diode passes through a turbid medium as well as partial occlusion. The 1D diffuser-based lensless camera array captures the transmitted information, and the captured pseudorandom patterns are then classified by the 1DInImCNN to output the desired signal. We compared our proposed underwater lensless optical signal detection system with an equivalent lens-based system in terms of detection performance and computational cost, and the results show that the former outperforms the latter. Moreover, we applied dimensionality reduction to the lensless patterns and studied the resulting theoretical computational costs and detection performance; the detection performance of the lensless system does not suffer appreciably. This makes lensless systems a strong candidate for low-cost compressive underwater optical imaging and signal detection.
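The dimensionality reduction mentioned above can be illustrated with a PCA-style projection of the captured 1D patterns onto their leading principal components. The SVD-based sketch below is offered only to illustrate the idea of compressing lensless patterns before classification; it is not the authors' pipeline, and the function names are our own:

```python
import numpy as np

def pca_reduce(patterns, k):
    """Project captured 1D lensless patterns onto their top-k principal components.

    patterns: (n_samples, n_pixels) array of captured intensity patterns.
    Returns (reduced coefficients, component basis, mean pattern).
    """
    mean = patterns.mean(axis=0)
    centered = patterns - mean
    # SVD of the centered data: rows of Vt are principal directions
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    reduced = centered @ Vt[:k].T
    return reduced, Vt[:k], mean
```

Keeping all components reconstructs the patterns exactly (`reduced @ components + mean`), while a small `k` yields the compressed representation whose classification cost is correspondingly lower.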
-
Image restoration and denoising have been challenging problems in optics and computer vision, and there has been active research in the optics and imaging communities toward robust, data-efficient systems for image restoration tasks. Recently, physics-informed deep learning has received wide interest in scientific problems. In this paper, we introduce a three-dimensional integral imaging-based, physics-informed, unsupervised CycleGAN (generative adversarial network) algorithm for underwater image descattering and recovery. The system consists of a forward and a backward pass. The base architecture consists of an encoder and a decoder: the encoder takes the clean image along with the depth map and the degradation parameters and produces the degraded image; the decoder takes the degraded image generated by the encoder along with the depth map and produces the clean image together with the degradation parameters. To give the input degradation parameters physical significance with respect to a physical model of the degradation, the physical model is also incorporated into the loss function. The proposed model has been assessed on a dataset curated through underwater experiments at various levels of turbidity. In addition to recovering the original image from the degraded image, the proposed algorithm also helps model the distribution from which the degraded images have been sampled. Furthermore, the proposed three-dimensional integral imaging approach is compared with a traditional deep learning-based approach and a 2D imaging approach under turbid and partially occluded environments. The results suggest the proposed approach is promising, especially under the above experimental conditions.
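A commonly used physical model for underwater and scattering degradation, of the kind an encoder with a depth map and degradation parameters might implement, is I = J·t + A·(1 − t) with transmission t = exp(−β·d). The paper does not state its exact model, so the sketch below is an assumption for illustration only:

```python
import numpy as np

def degrade_underwater(clean, depth, beta, ambient):
    """Scattering image-formation model: I = J*t + A*(1 - t), t = exp(-beta * d).

    clean:   J, the scene radiance (per-pixel intensity).
    depth:   d, per-pixel distance through the medium.
    beta:    attenuation coefficient of the medium (turbidity).
    ambient: A, backscattered ambient light level.
    """
    t = np.exp(-beta * depth)            # per-pixel transmission
    return clean * t + ambient * (1 - t)
```

With β = 0 the medium is clear and the model returns the clean image unchanged; as β grows, every pixel converges to the ambient backscatter level, which matches the intuition that high turbidity washes out the scene.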